
    Learning, Computing, and Trustworthiness in Intelligent IoT Environments: Performance-Energy Tradeoffs

    An Intelligent IoT Environment (iIoTe) comprises heterogeneous devices that can collaboratively execute semi-autonomous IoT applications, examples of which include highly automated manufacturing cells or autonomously interacting harvesting machines. Energy efficiency is key in such edge environments, since they are often based on an infrastructure of wireless, battery-run devices, e.g., e-tractors, drones, Automated Guided Vehicles (AGVs) and robots. The total energy consumption draws contributions from multiple iIoTe technologies that enable edge computing and communication, distributed learning, as well as distributed ledgers and smart contracts. This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption. Finally, the paper provides a vision for integrating these enabling technologies in energy-efficient iIoTe and a roadmap to address the open research challenges. Comment: Accepted for publication in IEEE Transactions on Green Communications and Networking.

    Efficient Simulation of the Outage Probability of Multihop Systems


    Local stochastic ADMM for communication-efficient distributed learning

    Abstract In this paper, we propose a communication-efficient alternating direction method of multipliers (ADMM)-based algorithm for solving a distributed learning problem in the stochastic non-convex setting. Our approach runs a few stochastic gradient descent (SGD) steps to solve the local problem at each worker, instead of finding an exact/approximate solution as proposed by existing ADMM-based works. By doing so, the proposed framework strikes a good balance between the computation and communication costs. Extensive simulation results show that our algorithm significantly outperforms existing stochastic ADMM methods in terms of communication efficiency, notably in the presence of non-independent and identically distributed (non-IID) data.
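    The local-update idea above can be sketched on a toy consensus least-squares problem. All problem sizes, step counts, and step sizes below are illustrative assumptions, not the paper's algorithm or experimental setup; the key point is that each worker runs a few minibatch SGD steps on its local augmented Lagrangian rather than solving the subproblem exactly before each communication round:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem: K workers, each holding a local least-squares term.
K, d, n = 4, 5, 50
x_true = rng.normal(size=d)
A = [rng.normal(size=(n, d)) for _ in range(K)]
b = [A_k @ x_true + 0.01 * rng.normal(size=n) for A_k in A]

rho, lr, local_steps, rounds, batch = 1.0, 0.05, 5, 200, 10
x = [np.zeros(d) for _ in range(K)]   # local primal variables
y = [np.zeros(d) for _ in range(K)]   # dual variables
z = np.zeros(d)                       # global consensus variable

for _ in range(rounds):
    for k in range(K):
        # Inexact local update: a few minibatch SGD steps on the local
        # augmented Lagrangian instead of an exact/approximate solve.
        for _ in range(local_steps):
            idx = rng.integers(0, n, size=batch)
            g = A[k][idx].T @ (A[k][idx] @ x[k] - b[k][idx]) / batch
            g += y[k] + rho * (x[k] - z)   # dual and penalty terms
            x[k] -= lr * g
    # One communication round: consensus (z) update, then dual ascent.
    z = np.mean([x[k] + y[k] / rho for k in range(K)], axis=0)
    for k in range(K):
        y[k] += rho * (x[k] - z)

err = np.linalg.norm(z - x_true)
```

    The tradeoff the abstract describes is visible in the two tunables: more `local_steps` means more computation per round but fewer communication rounds to reach a given accuracy.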

    DR-DSGD:a distributionally robust decentralized learning algorithm over graphs

    Abstract In this paper, we propose to solve a regularized distributionally robust learning problem in the decentralized setting, taking into account the data distribution shift. By adding a Kullback-Leibler regularization function to the robust min-max optimization problem, the learning problem can be reduced to a modified robust minimization problem and solved efficiently. Leveraging the newly formulated optimization problem, we propose a robust version of Decentralized Stochastic Gradient Descent (DSGD), coined Distributionally Robust Decentralized Stochastic Gradient Descent (DR-DSGD). Under some mild assumptions and provided that the regularization parameter is larger than one, we theoretically prove that DR-DSGD achieves a convergence rate of O(1/√(KT) + K/T), where K is the number of devices and T is the number of iterations. Simulation results show that our proposed algorithm can improve the worst distribution test accuracy by up to 10%. Moreover, DR-DSGD is more communication-efficient than DSGD, since it requires fewer communication rounds (up to 20 times fewer) to achieve the same worst distribution test accuracy target. Furthermore, the conducted experiments reveal that DR-DSGD results in a fairer performance across devices in terms of test accuracy.
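    The KL-regularized reduction can be illustrated with a stylized decentralized gradient descent on quadratic losses: the KL-regularized min-max over device weights collapses to a log-mean-exp objective, whose gradient weights each device's gradient by a softmax of its loss. The ring graph, quadratic losses, full (non-stochastic) gradients, step size, and the globally computed softmax weights below are all simplifying assumptions for this sketch, not the paper's exact DR-DSGD update:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: K devices on a ring graph with heterogeneous quadratic losses.
K, d = 6, 4
targets = [rng.normal(loc=2.0, size=d) for _ in range(K)]  # non-IID local optima

def loss(k, x):               # local objective f_k(x) = 0.5 * ||x - t_k||^2
    return 0.5 * np.sum((x - targets[k]) ** 2)

def grad(k, x):
    return x - targets[k]

# Doubly stochastic mixing matrix for a ring (self + two neighbours).
W = np.zeros((K, K))
for k in range(K):
    W[k, k] = 0.5
    W[k, (k - 1) % K] = 0.25
    W[k, (k + 1) % K] = 0.25

lam, lr, T = 2.0, 0.02, 800   # regularization parameter lam > 1, as assumed above
X = np.zeros((K, d))          # one model copy per device

def robust_obj(x):            # KL-regularized min-max reduces to log-mean-exp
    return lam * np.log(np.mean([np.exp(loss(k, x) / lam) for k in range(K)]))

for _ in range(T):
    # Softmax tilting up-weights the worst-performing devices. Computed
    # globally here for illustration only; DR-DSGD uses local information.
    losses = np.array([loss(k, X[k]) for k in range(K)])
    w = np.exp(losses / lam)
    w /= w.sum()
    G = np.stack([grad(k, X[k]) for k in range(K)])
    X = W @ X - lr * (K * w)[:, None] * G   # gossip step + weighted gradients

x_bar = X.mean(axis=0)
```

    The tilting is what drives the fairness result in the abstract: devices with larger losses receive larger weights, pulling the consensus iterate toward a solution with a better worst-case (and hence more uniform) performance.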